Multi-task Gaussian Process Prediction

Authors

  • Edwin V. Bonilla
  • Kian Ming Adam Chai
  • Christopher K. I. Williams
Abstract

In this paper we investigate multi-task learning in the context of Gaussian Processes (GP). We propose a model that learns a shared covariance function on input-dependent features and a “free-form” covariance matrix over tasks. This allows for good flexibility when modelling inter-task dependencies while avoiding the need for large amounts of data for training. We show that under the assumption of noise-free observations and a block design, predictions for a given task only depend on its target values and therefore a cancellation of inter-task transfer occurs. We evaluate the benefits of our model on two practical applications: a compiler performance prediction problem and an exam score prediction task. Additionally, we make use of GP approximations and properties of our model in order to provide scalability to large data sets.
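The model structure described in the abstract can be sketched directly: the covariance between task l at input x and task k at input x' is the product Kf[l, k] * kx(x, x'), so under a block design the full covariance over all task-input pairs is the Kronecker product of the task matrix Kf and the input Gram matrix Kx. Below is a minimal NumPy illustration of posterior-mean prediction under that structure. It is not the authors' code; the kernel choice (squared exponential), the example task covariance Kf, and all function names are assumptions made purely for illustration.

# Sketch (not the authors' implementation) of the multi-task GP covariance
# described in the abstract: cov(f_l(x), f_k(x')) = Kf[l, k] * kx(x, x'),
# i.e. a "free-form" task covariance Kf combined with a shared input
# covariance function kx via a Kronecker product.
import numpy as np

def rbf(X1, X2, lengthscale=1.0):
    # Shared input covariance kx: squared-exponential kernel (an assumed choice).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def multitask_posterior_mean(X, Y, Xstar, Kf, noise=1e-2, lengthscale=1.0):
    # X:  (N, D) training inputs shared by all tasks (block design)
    # Y:  (M, N) observed targets, one row per task
    # Kf: (M, M) free-form task covariance matrix (positive semi-definite)
    M, N = Y.shape
    Kx = rbf(X, X, lengthscale)                 # (N, N) input Gram matrix
    Ks = rbf(Xstar, X, lengthscale)             # (S, N) test-train covariance
    # Full covariance over the M*N stacked targets: Kf kron Kx, plus noise.
    K = np.kron(Kf, Kx) + noise * np.eye(M * N)
    alpha = np.linalg.solve(K, Y.reshape(-1))   # targets stacked task-major
    # Cross-covariance between test points (all tasks) and training targets.
    Kcross = np.kron(Kf, Ks)                    # (M*S, M*N)
    return (Kcross @ alpha).reshape(M, -1)      # (M, S) predicted means

# Example: two correlated tasks observed on the same five inputs.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(5, 1))
    Y = np.vstack([np.sin(X[:, 0]), 0.8 * np.sin(X[:, 0]) + 0.1])
    Kf = np.array([[1.0, 0.8], [0.8, 1.0]])     # strongly related tasks (assumed values)
    Xstar = np.linspace(-3, 3, 7)[:, None]
    print(multitask_posterior_mean(X, Y, Xstar, Kf))

Setting noise to zero in this sketch would let one check the cancellation property stated in the abstract: with noise-free observations and a block design, each task's predictions reduce to those of a single-task GP trained only on that task's own targets.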

Related Papers

It is all in the noise: Efficient multi-task Gaussian process inference with structured residuals

Multi-task prediction methods are widely used to couple regressors or classification models by sharing information across related tasks. We propose a multi-task Gaussian process approach for modeling both the relatedness between regressors and the task correlations in the residuals, in order to more accurately identify true sharing between regressors. The resulting Gaussian model has a covarian...

Transductive Learning for Multi-Task Copula Processes

We tackle the problem of multi-task learning with copula processes. Multivariable prediction in spatial and spatio-temporal processes, such as natural resource estimation and pollution monitoring, has typically been addressed using techniques based on Gaussian processes and co-Kriging. While the Gaussian prior assumption is convenient from analytical and computational perspectives, nature is domin...

SHEF-Lite 2.0: Sparse Multi-task Gaussian Processes for Translation Quality Estimation

We describe our systems for the WMT14 Shared Task on Quality Estimation (subtasks 1.1, 1.2 and 1.3). Our submissions use the framework of Multi-task Gaussian Processes, where we combine multiple datasets in a multi-task setting. Due to the large size of our datasets we also experiment with Sparse Gaussian Processes, which aim to speed up training and prediction by providing sensible sparse appr...

Kernel Multi-task Learning using Task-specific Features

In this paper we are concerned with multitask learning when task-specific features are available. We describe two ways of achieving this using Gaussian process predictors: in the first method, the data from all tasks is combined into one dataset, making use of the task-specific features. In the second method we train specific predictors for each reference task, and then combine their prediction...

Self-measuring Similarity for Multi-task Gaussian Process

Multi-task learning aims at transferring knowledge between similar tasks. The multi-task Gaussian process framework of Bonilla et al. models (incomplete) responses of C data points for R tasks (e.g., the responses are given by an R×C matrix) by using a Gaussian process; the covariance function takes its form as the product of a covariance function defined on input-specific features and an inter...

Journal:

Volume   Issue

Pages  -

Publication year: 2007